perm filename MULTI[E81,JMC] blob
sn#613629 filedate 1981-09-24 generic text, type C, neo UTF8
multi[e81,jmc] Proposals for multi-processing in S-1 LISP
1. These proposals place requirements on the S-1 operating system
and on LISP itself.
I have not yet investigated whether the present operating system
plans provide for suitable facilities.
2. The basic form of multi-processing proposed here is that
certain LISP constructions create queues of tasks. When there
are tasks on the queue, the operating system must know of them and
be able to assign processors to work on them. We assume that
there is no master processor. It is just that a processor, either
when it completes a task or at certain clock interrupts, executes
operating system code that may make it pick up one of these tasks.
Within the LISP program, the priority of tasks is a matter for the
program itself, although there should be defaults if the programmer
doesn't want to take matters fully into his own hands.
3. There are only sixteen processors in a maximal S-1 configuration,
and a large program should easily be able to spawn sixteen tasks
that can be done in parallel. Moreover, the control of multi-processing
is itself likely to be computationally expensive and to involve
costly context switching. Therefore, not every task that can
be done in parallel should be queued for parallel operation.
4. One rule is that all parallel processing should be invoked by
specific constructs. Anything written in ordinary COMMON LISP will
not give rise to parallelism. Of course, there can be pre-processors that
take COMMON LISP programs and find opportunities for parallelism
and rewrite the program in a way that calls for queuing.
5. One form of parallelism is to compute in parallel the arguments
of a function. Functions that permit this will be FEXPRs or FSUBRs
and will contain an extra argument that tells whether this particular
call is to be queued. This permits parallelism to be decided
dynamically. It seems to me that this is important, because whether
we want to compute the arguments in parallel depends not merely
on the function but on whether the computation of these particular
arguments is expected to involve enough computation to justify
calling for the help of other processors. The programmer can
force parallelism or non-parallelism simply by giving the
parameter the value t or (). Let's call this argument QLET for
Queue LET. The boolean expression serving as the QLET argument
will ordinarily have the form (and GQLET ...) where GQLET is a
global parameter that is set to () when all queuing is to be
inhibited, for example during early stages of debugging.
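As a sketch of the intended usage (the function F, the predicate
BIG-ENOUGH, and the argument forms are all hypothetical), a call to
such an FEXPR might look like:

```lisp
;; Hypothetical FEXPR F taking an extra QLET argument.  The boolean
;; is evaluated first; if it yields t the arguments are queued for
;; evaluation by whatever processors pick them up, and if it yields
;; () they are evaluated serially on the current processor.
(F (match pattern1 expr)
   (match pattern2 expr)
   (and GQLET (big-enough expr)))
```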
6. There can be a form of lambda expression (perhaps called QLAMBDA)
that has a QLET argument.
Maybe QLAMBDA can be the main tool for parallel arguments, and the
FSUBRs can be constructed using it. Thus
(defun fbaz (x ... z QLET)
  ((QLAMBDA QLET (u ... v) (baz u ... v)) x ... z)).
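Applied directly, and assuming the same hypothetical QLAMBDA, the
decision to queue can also be made at the point of call (BIG-PROBLEM,
SOLVE, and the other names here are illustrative only):

```lisp
;; Hypothetical: when the QLET expression yields t, the argument
;; forms (solve ...) are queued as parallel tasks; the body runs
;; only when both have completed.
((QLAMBDA (and GQLET (big-problem p))
          (u v)
          (combine u v))
 (solve (left-half p))
 (solve (right-half p)))
```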
7. Other LISP constructs such as DO need queueable forms with QLET
parameters. Perhaps QLAMBDA can be used to construct many of them
from ordinary forms, or some other general queuability constructor
can be found. In any case each such form should have a QLET argument.
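For instance, a queueable DO (call it QDO; the name and syntax are
hypothetical) might take the QLET expression as an extra first
argument:

```lisp
;; Hypothetical QDO: like DO, except that when the QLET expression
;; yields t each execution of the body is queued as a separate task,
;; and the form completes when all the tasks have.
(QDO (and GQLET (greaterp n 100))
     ((i 0 (add1 i)))
     ((equal i n))
  (process-element a i))
```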
8. The most important form of parallelism occurs when tasks are
being ANDed, i.e. all of the parallel tasks are to be completed
before the computation can continue. However, sometimes we have
an OR of tasks such as a parallel search for the solution to
a subproblem in which the first process to find the solution must
turn off the other searchers. In general parallel search is to be
avoided, because if we can start with the most likely prospect,
serial searching will involve less expected total compute time
than parallel search. However, parallel search may be justified
when real time is important.
Moreover, we may have an "almost AND" of tasks. This occurs when
the tasks are normally ANDed, but an error condition in one of the
tasks may require the suspension of the others.
All this requires that the tasks must be subject to interrupts.
It would seem that any process should be able to generate an
interrupt. This would cause each process to go to its interrupt
routine, which would determine whether it would continue or would
THROW to a suitable CATCH statement. The interrupt could then
be a conditional THROW and could be a parameter of a somewhat
generalized CATCH.
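A parallel OR-search might then be written with such a generalized
CATCH; QCATCH, its argument order, and its interrupt behavior are
hypothetical:

```lisp
;; Hypothetical QCATCH: when GQLET yields t the three searches are
;; queued in parallel.  The first task to execute
;; (THROW 'found answer) raises an interrupt in the others; each
;; runs its interrupt routine, which here decides to THROW as well,
;; unwinding all the tasks to this CATCH.
(QCATCH 'found GQLET
  (search-subtree tree1)
  (search-subtree tree2)
  (search-subtree tree3))
```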